Regulatory Markets: The Future of AI Governance
Hadfield, Gillian K., Clark, Jack
Appropriately regulating artificial intelligence is an increasingly urgent policy challenge. Legislatures and regulators lack the specialized knowledge required to translate public demands into legal requirements, while overreliance on industry self-regulation fails to hold producers and users of AI systems accountable to democratic demands. We propose regulatory markets, in which governments require the targets of regulation to purchase regulatory services from a private regulator. This approach to AI regulation could overcome the limitations of both command-and-control regulation and self-regulation: regulatory markets could enable governments to establish policy priorities for the regulation of AI, whilst relying on market forces and industry R&D efforts to pioneer the methods of regulation that best achieve policymakers' stated objectives.
Both eyes open: Vigilant Incentives help Regulatory Markets improve AI Safety
Bova, Paolo, Di Stefano, Alessandro, Han, The Anh
In the context of rapid discoveries by leaders in AI, governments must consider how to design regulation that matches the increasing pace of new AI capabilities. Regulatory Markets for AI is a proposal designed with adaptability in mind: governments set outcome-based targets for AI companies to achieve, which the companies can demonstrate by purchasing services from a market of private regulators. We use an evolutionary game theory model to explore the role governments can play in building a Regulatory Market for AI systems that deters reckless behaviour. We warn that it is alarmingly easy to stumble on incentives that would prevent Regulatory Markets from achieving this goal: 'Bounty Incentives', which reward private regulators only for catching unsafe behaviour. We argue that AI companies will likely learn to tailor their behaviour to how much effort regulators invest, discouraging regulators from innovating. Instead, we recommend that governments always reward regulators, except when a regulator failed to detect unsafe behaviour that it should have caught. These 'Vigilant Incentives' could encourage private regulators to find innovative ways to evaluate cutting-edge AI systems.
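The contrast between the two incentive schemes in the abstract can be illustrated with a stylised payoff function. This is a minimal sketch under assumed parameters (a flat reward and effort cost); the parameter names and values are illustrative assumptions, not taken from the paper's evolutionary game theory model, which is considerably richer.

```python
def regulator_payoff(scheme, detected_unsafe, missed_unsafe,
                     reward=1.0, effort_cost=0.2):
    """Stylised per-round payoff to a private regulator.

    scheme          -- "bounty" or "vigilant" (as described in the abstract)
    detected_unsafe -- the regulator caught unsafe behaviour this round
    missed_unsafe   -- unsafe behaviour occurred but the regulator missed it
    reward, effort_cost -- illustrative assumed parameters, not the paper's
    """
    if scheme == "bounty":
        # Bounty Incentives: the regulator is paid only when it
        # actually catches unsafe behaviour.
        pay = reward if detected_unsafe else 0.0
    elif scheme == "vigilant":
        # Vigilant Incentives: the regulator is always paid, except
        # when it failed to detect unsafe behaviour it should have caught.
        pay = 0.0 if missed_unsafe else reward
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    # Monitoring effort is costly under either scheme.
    return pay - effort_cost
```

Under "bounty", a regulator facing mostly safe firms earns nothing and bears the effort cost, so it is tempted to cut monitoring effort; under "vigilant", the reward is withheld only for a detectable failure, which keeps investing in better evaluation methods worthwhile.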
An AI regulation strategy that could really work
When even the companies developing AI themselves agree on the need for regulation, it is time to stop discussing abstract principles and get down to the business of how to regulate a rapidly advancing technology landscape. It is clear that our regulatory system needs an update. If we try to regulate 21st-century technology and beyond with 20th-century tools, we'll get none of the benefits of regulation and all of the downsides. So, if we need to reinvent the rules to keep pace with the technological change advanced by the likes of Google, Amazon, and Facebook, where do we start? Google's Sundar Pichai is right that technology companies cannot simply build AI and leave it to the will of the market. But what we can do is try to use the best traits of markets -- competition, transparency, rapid iteration -- to reform our regulatory system.